ContextCapture User Guide

Navigation Modes

Switch between Orbit and Pan modes to choose how the mouse buttons control the camera.

Lock on Photo

This mode activates an immersive view of the scene through a selected photo.

The photo is aligned with the scene using the photo and photogroup parameters (pose and optical properties). Navigation is locked to the photo viewpoint.

Use Photo overlay settings to change image plane depth and opacity.

Lock on Photo displays 3D objects and photographs simultaneously in the same space. Using photos aligned with the 3D model lets you cross-compare 3D data with reference data (quality control, etc.) and enhances 3D views with reliable context and detail.
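The alignment described above can be sketched with a simple pinhole camera model: given a photo's pose (rotation and camera center) and optical properties (focal length, principal point), the photo's image plane can be placed at a chosen depth in the scene. The sketch below is illustrative only; the parameter values, the identity rotation, and the function name are hypothetical and do not reflect ContextCapture's internal API.

```python
import numpy as np

# Hypothetical pinhole parameters for illustration (not ContextCapture's API):
# focal length f (px), principal point (cx, cy), image size (w, h).
f, cx, cy, w, h = 1000.0, 960.0, 540.0, 1920, 1080
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])

R = np.eye(3)                   # world-to-camera rotation (photo pose)
C = np.array([0.0, 0.0, 10.0])  # camera center in world coordinates

def image_plane_corners(depth):
    """World positions of the four image corners when the photo is
    rendered as a plane at the given depth along each viewing ray."""
    corners_px = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)
    rays = np.linalg.inv(K) @ np.c_[corners_px, np.ones(4)].T  # camera-space rays
    rays /= rays[2]                        # normalize so the z component is 1
    return (C[:, None] + R.T @ (rays * depth)).T

print(image_plane_corners(5.0))
```

Changing the image plane depth (as in the Photo overlay settings) simply slides these corners along their viewing rays, so the photo always stays aligned with the scene.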

Example applications:

  • Understand the acquisition and the scene.
  • Check 3D data alignment with photos: scan, tie points, mesh, etc.
  • Identify and understand reality mesh reconstruction defects.
  • Enhance 3D view inspection with details and context.

Photo Navigation

Use this mode to link the 3D models with the photos. You can navigate through the photos and match them to the corresponding viewpoints in the 3D model.

Photo Navigation mode

Enable the photo panel from the View Layout menu for a better user experience.

Click on the 3D model or on the photo panel to specify a target point: the bottom ribbon shows the best photos viewing this point, and the target is located simultaneously on the 3D model and on the photo.

The photo selected on the right matches the target selected in the 3D view. Use the photo panel's auto-sync mode to synchronize the photo when the target is moved in the 3D view.

The viewpoint in the 3D view matches the target selected in the photo. Use the 3D view's auto-sync mode to synchronize the 3D display when the target is moved on the photo. Use the Zoom on photo button to reset the 3D view's viewpoint on the target selected in the photo.

Thumbnails show the photos actually viewing the target, sorted by preference. Disabling the target restores all photos.

Enable Fast Filtering mode to ignore occlusion for photo filtering and sorting.

In Photo Navigation mode you can also display a 3D model previously produced from this block, if the format is supported. You can select the displayed 3D mesh from the Display Styles drawer.

Filter Tools

Filter photos by name or by camera device (photogroup).

Display Styles

The drawer controls the visibility and display style of the 3D view items.

Some options may be disabled depending on the displayed data.

See also chapter Block 3D view > Contents.

Basemaps

The Basemaps drawer allows you to add and manage basemap layers. (See also Basemap manager.)

Measurement Tools

Measurements

Opens the measurement tool.

The following measurements are available:

  • Location: get accurate 3D coordinates of a point in a given coordinate system.
  • Distance: get 3D distance and height difference between two points.
  • Area: get polygon area and contour length.
  • Volume: get the volume between the 3D model and a reference plane within a polygon.
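The geometry behind the Distance and Area measurements can be sketched in a few lines: a 3D Euclidean distance plus a height (z) difference, and a planar polygon area via the shoelace formula. The Volume measurement additionally integrates the height difference between the model and the reference plane over the polygon. This is a minimal sketch of the underlying math, not ContextCapture's implementation; the function names and sample coordinates are illustrative.

```python
import math

def distance_3d(p, q):
    """3D distance and height (z) difference between two points."""
    return math.dist(p, q), q[2] - p[2]

def polygon_area_2d(pts):
    """Planar polygon area via the shoelace formula (pts in draw order)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

d, dz = distance_3d((0, 0, 0), (3, 4, 12))
print(d, dz)                                               # 13.0 12
print(polygon_area_2d([(0, 0), (10, 0), (10, 5), (0, 5)]))  # 50.0
```

The contour length reported by the Area measurement is simply the sum of the polygon's edge lengths, computed with the same distance function.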

Quality Metrics

Quality Metrics allows a quick 3D analysis of the aerotriangulation result. Various metrics are proposed, ranging from scene coverage to position uncertainty for photos and survey points.

Access the metrics by clicking the Quality Metrics button.

Then, click on the metric thumbnail to access the entire list of metrics.

The Quality Metrics dialog, with the current metric and the list of all available metrics.

Proposed metrics:

  1. Scene
    • Coverage: indicates the number of photos that potentially see each area. Note: occlusions are ignored; points hidden by other parts of the scene are considered visible as long as they are within a photo's field of view.
    • Height: colors the generated tie points according to their vertical position.
    • Resolution: displays the ground resolution (in 3D units/px) for each generated tie point.
  2. Surveys
    • Position uncertainty: shows how certain the estimated position of each survey point is. Transparent spheres around the survey points indicate the position uncertainty (scaled for readability). Colored ellipses show the most uncertain direction and its magnitude. Note: only available if surveys are present in the scene.
    • Control point distribution: indicates how far the generated tie points are from a control point. This distance is computed through the photos and the links between photos. Note: only available if control points are present in the scene.
  3. Photos
    • Position uncertainty: illustrates how certain the ContextCapture optimization is of the estimated photo positions. Transparent spheres around photos indicate the position uncertainty (scaled for readability). Colored ellipses show the most uncertain direction and its magnitude.
    • Number of tie points: colors the photos according to the number of tie points linked to them.
    • Distance to input positions: shows the offset between the input positions and the computed photo positions. Colors indicate the magnitude, and lines indicate the direction of the change in position. Note: only available if input positions are provided.
    • Distance to input rotations: illustrates the difference between the input rotations and the computed photo rotations. Colors indicate the angle difference. Note: only available if input rotations are provided.
    • Connection graph: displays links between photos; two photos are paired together if they have tie points in common.
  4. Tie Points
    • Number of observing photos: represents the number of photos used to define each point, smoothed over the set of photos seeing the point. The mean value is used to provide comprehensive information about the scene and to filter out noisy data.
    • Reprojection error: considers the pixel reprojection error for each tie point.
    • Position uncertainty: indicates the individual tie point position uncertainty, averaged over the photos seeing the point. This metric illustrates how well placed we can expect the generated tie points to be after the ContextCapture optimization.
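The Coverage and Reprojection error metrics above both rest on projecting 3D points through a camera model. A minimal sketch, assuming simple pinhole intrinsics K and a world-to-camera pose (R, C); this is illustrative only and not ContextCapture's internal implementation, and the function names are hypothetical.

```python
import numpy as np

def project(K, R, C, X):
    """Project world point X into a pinhole camera (R: world-to-camera
    rotation, C: camera center). Returns pixel coords, or None if the
    point is behind the camera."""
    x = R @ (X - C)
    if x[2] <= 0:
        return None
    uvw = K @ x
    return uvw[:2] / uvw[2]

def coverage(cameras, X, image_size):
    """Number of photos whose field of view contains X. Occlusions are
    ignored, as in the Coverage metric."""
    w, h = image_size
    count = 0
    for K, R, C in cameras:
        uv = project(K, R, C, X)
        if uv is not None and 0 <= uv[0] < w and 0 <= uv[1] < h:
            count += 1
    return count

def reprojection_error(K, R, C, X, observed_uv):
    """Pixel distance between a tie point's observation and its reprojection."""
    uv = project(K, R, C, X)
    return None if uv is None else float(np.linalg.norm(uv - observed_uv))
```

Coverage counts a point as seen whenever it projects inside the image bounds, which matches the note above that hidden points are still considered visible when they lie within a photo's field of view.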